AMD Vice President: "Fallout 4 VR" will be an "industry disruptor"

Editor's note: AR technology has only entered the public eye in the past two years, but in fact it has been in development for a decade. Benedict Evans, a well-known investor at a16z, recently published an article reviewing the evolution of AR. He believes the technology has been demonstrated and is quite impressive, but that mass-market commercial products have not yet appeared, though they should not be far off.

In February 2006, Jeff Han demonstrated multi-touch technology in a TED talk, when multi-touch interfaces were still experimental. The demo video is still online. Watching it now, the technology he showed looks unremarkable; you can buy it in a $50 Android phone. Most of the audience were experienced technology watchers, and they cheered for it: what looks mundane today was amazing then. A year later, Apple released the iPhone, and multi-touch reset the technology industry.

Looking back over the past 10 years, the development of multi-touch passed through four key points. First, multi-touch became an interesting concept in the laboratory; second, demonstrations of the technology appeared in public; third, with the iPhone launch, the first viable consumer product appeared; fourth, some seven years later, sales exploded as the iPhone continued to evolve and Android caught up.

Take a look at the chart below. Note the lag in the curve: after the iPhone appeared in 2007, it took several years for sales to take off. Most revolutionary technologies develop in stages, and few arrive fully mature. Along the way, some systems went down the wrong path, such as Symbian in the West and iMode in Japan.

I think today's AR sits between the second and third key points: the technology has been demonstrated, and it is quite impressive. We have also seen prototype products, but mass-market commercial products have not yet appeared, though they are very close.

Microsoft's HoloLens is shipping. Its position tracking is excellent, and it integrates the computer into the headset itself, which has drawbacks: it raises the cost, the field of view is narrow, and the price is $3,000. A second-generation HoloLens is due in 2019. Apple is clearly developing something similar too, as its hiring, acquisitions, and its CEO's comments suggest. I even suspect that the miniaturization, power, and audio technologies Apple developed for the Apple Watch and AirPods will be just as critical in AR. Google, Facebook, and Amazon may also be building something, and smaller companies and startups are working on interesting things as well.

Magic Leap is also developing its own wearable device and has released videos showing what it can do. The videos are cool, but watching an iPhone video is entirely different from using an iPhone. Likewise, watching an AR video is completely different from wearing an AR headset, walking around, and seeing the world in front of you. I have tried it myself, and it is not bad at all.

The first level of AR is the Google Glass kind of experience: a screen floats content in the space in front of you, but it has no connection to the world. Conceptually, Google Glass is very similar to a smartwatch, except that you look up and to the right instead of down. The glasses give you a new screen, but they sense nothing about the world in front of you. With more advanced technology, the field of view can become spherical, with windows, 3D objects, and everything else floating in space.

When AR becomes "real AR" or "mixed reality," the device starts to perceive the environment and can place images into the world convincingly enough that you suspend disbelief and take them for reality. Unlike Google Glass, such a headset continuously maps the environment in 3D and tracks your head position. You could hang a virtual TV on the wall and it would stay on the wall as you move, or simply turn the entire wall into a display.
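
To make the "TV stays on the wall" idea concrete, here is a minimal sketch, assuming a SLAM system that tracks the headset's pose in a world map: a virtual object is stored as a fixed pose in that map, and its position in your view is recomputed every frame. The classes and transforms below are illustrative, not any vendor's API.

```python
import numpy as np

class Anchor:
    """A virtual object pinned to a fixed pose in the SLAM world map."""
    def __init__(self, world_from_object: np.ndarray):
        self.world_from_object = world_from_object  # 4x4 homogeneous transform

def object_in_view(anchor: Anchor, world_from_head: np.ndarray) -> np.ndarray:
    """Recompute the object's pose relative to the headset for this frame.

    The anchor never moves in world coordinates; only the head does, which
    is why the virtual TV appears to stay on the wall as you walk around.
    """
    head_from_world = np.linalg.inv(world_from_head)
    return head_from_world @ anchor.world_from_object

# Hypothetical usage: pin a TV at the world origin, then move the head.
tv = Anchor(world_from_object=np.eye(4))
head = np.eye(4)
head[0, 3] = 2.0  # the wearer walks 2 m along the x axis
print(object_in_view(tv, head))  # TV now sits 2 m away in head coordinates
```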

You could also put Minecraft on the coffee table and shape a hill in the palm of your hand as if it were modeling clay. If other people wear the same glasses, they see the same thing: you can put a screen on a wall or a conference table and your whole team can use it simultaneously, or you and your children can play on the same Minecraft map at the same time. Or you could hide a small virtual robot behind the sofa and have your child go look for it. This is mixed reality: it gives us a screen, or rather it turns the world around us into an infinite screen.

Sometimes you will take the headset somewhere and stand still, using only SLAM: the device maps the 3D surfaces of the room but does not understand what they are. Now suppose I meet you at a networking event. I look at you, and your LinkedIn card appears over your head; or a Salesforce record tells me you are a key prospect; or a Truecaller record tells me you once tried to sell me insurance. As in "Black Mirror," you might even be able to block someone out. That is, the image sensors in the glasses can not only map the surrounding objects but also recognize what, and who, they are.
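
Here is a minimal sketch of the "LinkedIn card over someone's head" idea. Everything in it is hypothetical: no public API works this way today, and the stub services stand in for whatever providers such a platform might let you plug in.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class PersonCard:
    source: str     # which service produced this card
    headline: str   # the one line rendered over the person's head

def lookup_person(face_id: str,
                  services: list[Callable[[str], Optional[PersonCard]]]) -> list[PersonCard]:
    """Ask each registered service whether it knows this recognized face."""
    return [card for svc in services if (card := svc(face_id)) is not None]

# Stand-in services; a real system might query LinkedIn, Salesforce, Truecaller...
def linkedin_stub(face_id):   return PersonCard("LinkedIn", "VP of Sales at Example Corp")
def truecaller_stub(face_id): return PersonCard("Truecaller", "Tried to sell you insurance")

for card in lookup_person("face-123", [linkedin_stub, truecaller_stub]):
    print(f"[{card.source}] {card.headline}")
```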

This is real AR: the device can show you images that sit alongside the world, and those images can also become part of it. The glasses might show you something that looks like a smartphone screen, or like a 2,000-foot screen. They can also blend the display into the real world and change the real world. So the experience spans two poles: at one end, you can fill the whole world with screens and content, or blot the world out; at the other, you can drop subtle cues or changes into the world as you move through it. It might translate a sign into a language you understand, but it could go further than that; it could also rewrite American English into British English. If someone installs the Chrome extension that replaces "millennial" with "snake people" (an extension that really exists), what would the MR version of that extension change? In short, working out what would be fun here is one of the most important questions we need to address.
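
As a sketch of what the mixed-reality analogue of that browser extension might look like: text recognized in the camera feed passes through rewrite rules before being re-rendered over the real sign. The OCR step is faked here; the rule mechanism is the point, and everything is illustrative.

```python
import re

# Rewrite rules applied to any text the glasses recognize in the world.
REWRITE_RULES = [
    (re.compile(r"\bmillennials?\b", re.IGNORECASE), "snake people"),
    # A sign-translation rule (say, Spanish to English) would slot in the same way.
]

def rewrite_view_text(recognized_text: str) -> str:
    """Apply each rule to the OCR'd text before it is drawn back into view."""
    for pattern, replacement in REWRITE_RULES:
        recognized_text = pattern.sub(replacement, recognized_text)
    return recognized_text

print(rewrite_view_text("Millennials love this restaurant"))
# -> "snake people love this restaurant"
```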

Once the glasses become small enough, will we wear them all day? If not, many ambient applications will never find a use. Alternatively, you might rely on your watch and phone, which are always with you, and put the glasses on only when the context calls for it. That would also solve the social problem Google Glass ran into: taking out a phone, glancing at a watch, or putting on a pair of glasses to read something all send signals other people understand, but if you wear Google Glass at a bar, nobody can read the signal.

This brings us to another question: will VR and AR converge? It is certainly possible; both will exist, and the engineering challenges they face are related. But delivering VR and AR in the same device is hard: VR takes you to another world and blocks everything else out, so the edges of the headset are sealed, while AR must not be. AR's challenge is to let the world through while blocking the things you don't want to see; VR starts from a black screen.

With AR glasses, people can still see your eyes. In 10 or 20 years much of this may come together, but for now the two technologies are different. In the late 1990s we argued about what mobile internet devices would look like: a separate audio unit plus a screen, a handset plus a keyboard, or a keyboard combined with a screen. We were still debating form factors; then, in 2007, everything was resolved onto a single screen. Similarly, VR and AR may take a while to work this question out.

If something is not physically present, how do we control it and interact with it? Is a VR-style physical controller enough? Is gesture tracking good enough? Multi-touch on smartphones gave us direct physical interaction: we touch the thing we want to touch, rather than moving a mouse that sits a foot or two away from the target. But can we touch an AR object hanging in the air? Is that interaction model suitable for all-day use? Magic Leap has created a real sense of depth, so you feel you are actually touching something; but do you need an interface that feels solid when your hand passes over it?

Should we use speech instead? How limiting would that be? Or eye tracking? If the glasses support gaze tracking, will you look at what you want and then tap your watch to select it? These are the same questions smartphones and PCs had to answer before them. As with the form-factor debates of 2000 or 1990, the answers are not clear; in fact, even the questions are not clear yet.
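
The "look at it, then tap your watch" model can be sketched as a small loop: an eye tracker supplies a gaze ray, the object nearest that ray gains focus, and a tap on the watch confirms the selection. All of the types, thresholds, and object names below are invented for illustration.

```python
import numpy as np

def focused_object(gaze_dir: np.ndarray, objects: dict[str, np.ndarray],
                   head_pos: np.ndarray, max_angle_deg: float = 5.0):
    """Return the object whose direction lies closest to the gaze ray."""
    best, best_angle = None, max_angle_deg
    for name, pos in objects.items():
        to_obj = pos - head_pos
        cos = np.dot(gaze_dir, to_obj) / (np.linalg.norm(gaze_dir) * np.linalg.norm(to_obj))
        angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

objects = {"virtual_tv": np.array([0.0, 0.0, 3.0]),
           "minecraft_table": np.array([2.0, -0.5, 2.0])}
target = focused_object(np.array([0.0, 0.0, 1.0]), objects, np.zeros(3))

watch_tapped = True  # in practice, an event from the paired watch
if watch_tapped and target:
    print(f"selected: {target}")  # -> selected: virtual_tv
```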

AR puts objects and data into the world around us, and the more you think about this, the more it becomes an AI problem as much as a physical-interface problem; the two matter equally. When I walk up to you, what should I see, your LinkedIn profile or your Tinder profile? When should I see a new message, instantly or later? When I stand outside a restaurant, should I have to say, "Hey Foursquare, is the food here any good?", or should the device's operating system handle it automatically? Where, in the end, does the agency sit: in the operating system, in each service plugged into it, or in a Google Brain in the cloud?

Google, Apple, Microsoft, and Magic Leap may take different philosophical positions on this, but in my view, if you want it to work well, most things should happen automatically; that is, it should be AI. You may remember Eric Raymond's rule that a computer should never ask you a question it is able to work out for itself. A computer that can see what you see knows a great deal about what you are looking at. Over the next 10 years, machine learning will keep developing and will remove an entire layer of problems. Today we assume those problems must be handled manually; in the future they will not be.
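
Raymond's rule, as this paragraph applies it to AR, reduces to a one-line policy: act silently when the system's inference is confident enough, and only fall back to asking otherwise. The threshold and intent labels here are invented for illustration.

```python
def decide(inferred_intent: str, confidence: float, threshold: float = 0.9) -> str:
    """Act automatically above the threshold; ask the user below it."""
    if confidence >= threshold:
        return f"do: {inferred_intent}"                 # never ask what you can work out
    return f"ask: did you mean {inferred_intent}?"      # degrade to a prompt

print(decide("translate_sign", 0.97))        # -> do: translate_sign
print(decide("show_linkedin_card", 0.55))    # -> ask: did you mean show_linkedin_card?
```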

When we moved from the desktop's windows-keyboard-mouse UI to the smartphone's touch and direct interaction, a whole layer of questions was removed as the abstraction changed. Where is the photo stored? Where is the car you hailed? Which email app should I use? What is the password? The smartphone doesn't ask; those questions simply went away. When AR arrives, we will move further in the same direction: a small window may appear in front of you with a smartphone-style app inside it, but the experience will be much more than that, just as Snapchat is nothing like Facebook's desktop site. A contextual, invisible, AI-driven UI will change everything again.

As AR glasses come to understand more and more of the world around you (and of you), they will see more and more, and they will hand some of what they see to different cloud services, with the choice of service depending on context, use case, and application model. Is that a face, and are you talking to it? Send it to Salesforce, LinkedIn, Truecaller, Facebook, or Tinder. Is it a pair of shoes? Send it to Pinterest, Amazon, or Net-a-Porter. If everyone looks bored in a meeting, do you send the record to SuccessFactors? Information like this raises serious privacy concerns. I once wrote on my blog that autonomous cars continuously capture HD, 360-degree 3D video; a city full of them becomes a panopticon. If everyone wears AR glasses, how do you escape that, and what happens when the glasses are hacked? If your connected home is hacked, you have a poltergeist; if your AR glasses are hacked, you have hallucinations.
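
The routing question this paragraph raises can be sketched as a dispatch table: given something the glasses recognize plus the current context, which cloud service should see it? The table and contexts below are invented; the point is that someone, whether the OS, each service, or an AI in the cloud, has to own this decision.

```python
from dataclasses import dataclass

@dataclass
class Sighting:
    kind: str      # e.g. "face", "shoes"
    context: str   # e.g. "sales_meeting", "shopping", "bar"

# A hard-coded stand-in for whatever actually makes this call.
ROUTES = {
    ("face", "sales_meeting"): ["Salesforce", "LinkedIn"],
    ("face", "bar"):           ["Tinder"],
    ("shoes", "shopping"):     ["Pinterest", "Amazon", "Net-a-Porter"],
}

def route(sighting: Sighting) -> list[str]:
    """Pick the services that should receive this sighting, by context."""
    return ROUTES.get((sighting.kind, sighting.context), [])

print(route(Sighting("face", "sales_meeting")))  # ['Salesforce', 'LinkedIn']
print(route(Sighting("shoes", "shopping")))      # ['Pinterest', 'Amazon', 'Net-a-Porter']
```

An AI-driven operating system would presumably learn this mapping rather than hard-code it, which is exactly where the privacy concern bites: the mapping decides who gets to see your world.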

Finally, there is an important question worth asking: how many people will have an AR device? Will AR be an accessory used by a minority of smartphone users, like a smartwatch? Or will every small town in Britain and Indonesia have shops selling dozens of models of $50 AR glasses from China, the same shops that sell Android devices today? And what will the bandwidth cost?

This question is still difficult to answer; it is a bit too early. In the late 1990s and early 2000s, we saw debates like this: would everyone have the same mobile data device, or would some people have smartphones, others feature phones, and then simpler devices further down the line, with no camera and no color screen? With hindsight, that debate echoes an even earlier one: would everyone have a PC, or would some people stick with word processors? The logic of scale and general-purpose computing won out: first the PC and then the smartphone became the single universal device. Today 5 billion people own a mobile phone and 2 to 3 billion own a smartphone, and most of the rest will clearly follow.

There is a new version of that question worth thinking about: will most people stick with smartphones while some (100 million? 500 million? a billion?) adopt glasses as an accessory, or will glasses become the new universal device? Any answer to this today rests on imagination, not analysis; in 1995, saying that everyone on the planet would have a mobile phone was just such a claim.

Translation: Little Soldier. Editor: Yang Zhifang.
